Multiclass Classification

In machine learning and statistical classification, multiclass classification or multinomial classification is the problem of classifying instances into one of three or more classes (classifying instances into one of two classes is called binary classification). For example, deciding whether an image shows a banana, a peach, an orange, or an apple is a multiclass classification problem, with four possible classes (banana, peach, orange, apple), while deciding whether an image contains an apple or not is a binary classification problem (with the two possible classes being: apple, no apple).

While many classification algorithms (notably multinomial logistic regression) naturally permit the use of more than two classes, some are by nature binary algorithms; these can, however, be turned into multinomial classifiers by a variety of strategies.

Multiclass classification should not be confused with multi-label classification, where multiple labels are to be predicted for each instance (e.g., predicting that an image contains both an apple and an orange, in the previous example).


Better-than-random multiclass models

From the confusion matrix of a multiclass model, we can determine whether a model does better than chance. Let K \geq 3 be the number of classes, \mathcal{X} a set of observations, \hat{y}: \mathcal{X} \to \{1, \dots, K\} a model of the target variable y: \mathcal{X} \to \{1, \dots, K\}, and n_{ij} the number of observations in the set \{y=i\} \cap \{\hat{y}=j\}. We write n_{i \cdot} = \sum_j n_{ij}, n_{\cdot j} = \sum_i n_{ij}, n = \sum_i n_{i \cdot} = \sum_j n_{\cdot j}, \lambda_i = \frac{n_{i \cdot}}{n} and \mu_j = \frac{n_{\cdot j}}{n}. It is assumed that the confusion matrix (n_{ij})_{i,j} contains at least one non-zero entry in each row, that is, \lambda_i > 0 for any i. Finally, we call the matrix of conditional probabilities (\mathbb{P}(\hat{y}=j \mid y=i))_{i,j} = \left(\frac{n_{ij}}{n_{i \cdot}}\right)_{i,j} the "normalized confusion matrix".
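As a minimal illustration, the following Python sketch (using NumPy; the counts and variable names are made up for the example) builds the normalized confusion matrix, the marginals \lambda_i and \mu_j, and checks the non-zero-row assumption:

    import numpy as np

    # Raw confusion matrix: rows = true class y, columns = predicted class y_hat.
    # The counts below are illustrative only.
    counts = np.array([[50,  3,  2],
                       [ 4, 40,  6],
                       [ 1,  5, 44]], dtype=float)

    n = counts.sum()              # total number of observations
    n_i = counts.sum(axis=1)      # row sums n_{i.}
    n_j = counts.sum(axis=0)      # column sums n_{.j}
    lam = n_i / n                 # class proportions lambda_i
    mu = n_j / n                  # prediction proportions mu_j

    assert (n_i > 0).all(), "each row must contain at least one non-zero entry"

    # Normalized confusion matrix: P(y_hat = j | y = i).
    norm_cm = counts / n_i[:, None]
    print(norm_cm)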


Intuitive explanation

The lift is a way of measuring the deviation from independence of two events A and B: \mathrm{lift}(A,B) = \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(A)\,\mathbb{P}(B)} = \frac{\mathbb{P}(A \mid B)}{\mathbb{P}(A)} = \frac{\mathbb{P}(B \mid A)}{\mathbb{P}(B)}. We have \mathrm{lift}(A,B) > 1 if and only if the events A and B occur simultaneously with a greater probability than if they were independent. In other words, if one of the two events occurs, the probability of observing the other event increases. A first condition to satisfy is therefore \mathrm{lift}(y=i, \hat{y}=i) \geq 1 for any i. Moreover, the quality of a model (better or worse than chance) does not change if we over- or undersample the dataset, that is, if we multiply each row R_i of the confusion matrix by a constant c_i > 0. Thus the second condition is that the necessary and sufficient conditions for doing better than chance depend only on the normalized confusion matrix.

The condition on lifts can be reformulated with one-versus-rest binary models: for any i, we define the binary target variable y_i as the indicator of the event \{y=i\}, and the binary model \hat{y}_i of y_i as the indicator of the event \{\hat{y}=i\}. Each of the \hat{y}_i models is a "one versus rest" model. \mathrm{lift}(y=i, \hat{y}=i) depends only on the events \{y=i\} and \{\hat{y}=i\}, so merging or not merging the other classes does not change its value. We therefore have \mathrm{lift}(y=i, \hat{y}=i) = \mathrm{lift}(y_i=1, \hat{y}_i=1), and the first condition states that all binary one-versus-rest models are better than chance.
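A small sketch (continuing the illustrative NumPy example above) computes the per-class lifts \mathrm{lift}(y=i, \hat{y}=i) directly from the counts and verifies that they coincide with the lifts of the corresponding one-versus-rest binary models:

    import numpy as np

    counts = np.array([[50,  3,  2],
                       [ 4, 40,  6],
                       [ 1,  5, 44]], dtype=float)
    n = counts.sum()
    n_i = counts.sum(axis=1)   # n_{i.}
    n_j = counts.sum(axis=0)   # n_{.j}

    # lift(y=i, y_hat=i) = P(y=i, y_hat=i) / (P(y=i) P(y_hat=i)) = n * n_ii / (n_{i.} n_{.i})
    lifts = n * np.diag(counts) / (n_i * n_j)

    # One-versus-rest check for class i: collapse all other classes into a single one.
    def ovr_lift(counts, i):
        tp = counts[i, i]
        pos_true = counts[i, :].sum()
        pos_pred = counts[:, i].sum()
        return counts.sum() * tp / (pos_true * pos_pred)

    for i in range(counts.shape[0]):
        assert np.isclose(lifts[i], ovr_lift(counts, i))
    print(lifts)   # a better-than-chance model should have every lift >= 1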


Example

If K=2 and 2 is the class of interest, the normalized confusion matrix is \begin{pmatrix} \mathrm{TNR} & 1-\mathrm{TNR} \\ 1-\mathrm{TPR} & \mathrm{TPR} \end{pmatrix} and we have \mathrm{lift}(y=1, \hat{y}=1) - 1 = \frac{\mathbb{P}(y=1,\,\hat{y}=1)}{\mathbb{P}(y=1)\,\mathbb{P}(\hat{y}=1)} - 1 = \frac{n\,n_{11}}{n_{1 \cdot}\,n_{\cdot 1}} - 1 = \frac{n\,n_{11} - n_{1 \cdot}\,n_{\cdot 1}}{n_{1 \cdot}\,n_{\cdot 1}} = \frac{n_{11}\,n_{22} - n_{12}\,n_{21}}{n_{1 \cdot}\,n_{\cdot 1}}. Thus \mathrm{lift}(y=1, \hat{y}=1) \geq 1 \iff n_{11}\,n_{22} - n_{12}\,n_{21} \geq 0. Similarly, by swapping the roles of 1 and 2, we find that \mathrm{lift}(y=2, \hat{y}=2) \geq 1 \iff n_{11}\,n_{22} - n_{12}\,n_{21} \geq 0. Dividing by n_{1 \cdot}\,n_{2 \cdot}, we find that the necessary and sufficient condition on the normalized confusion matrix is \mathrm{TNR}\,\mathrm{TPR} - (1-\mathrm{TNR})(1-\mathrm{TPR}) \geq 0 \iff \mathrm{TPR} + \mathrm{TNR} - 1 \geq 0 \iff J \geq 0. This brings us back to the classical binary condition: Youden's J must be positive (or zero for random models).
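This equivalence is easy to check numerically; the sketch below (pure Python, with a made-up 2×2 confusion matrix) compares the sign of the lift condition with the sign of Youden's J:

    # Made-up binary confusion matrix: rows = true class (1, 2), columns = predicted.
    n11, n12 = 80, 20   # true class 1: TN, FP (class 2 is the class of interest)
    n21, n22 = 30, 70   # true class 2: FN, TP

    tnr = n11 / (n11 + n12)
    tpr = n22 / (n21 + n22)
    youden_j = tpr + tnr - 1

    lift_condition = n11 * n22 - n12 * n21   # same sign as Youden's J
    print(youden_j, lift_condition)
    assert (youden_j >= 0) == (lift_condition >= 0)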


Random models

A random model is a model that is independent of the target variable. This property is easily reformulated in terms of the confusion matrix: the model \hat{y} of y is uninformative if and only if there exist two families of numbers (\alpha_i)_i and (\beta_j)_j such that \mathbb{P}(\{y=i\} \cap \{\hat{y}=j\}) = \alpha_i \beta_j for any i and j.
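In other words, the joint probability matrix of a random model factorizes as an outer product. A short sketch (NumPy, with illustrative values) checks this factorization by comparing the joint matrix with the product of its marginals:

    import numpy as np

    # Joint probabilities P(y = i, y_hat = j) of a model; illustrative values.
    joint = np.array([[0.30, 0.10],
                      [0.45, 0.15]])

    row_marg = joint.sum(axis=1)   # P(y = i)
    col_marg = joint.sum(axis=0)   # P(y_hat = j)

    # The model is random (uninformative) iff joint == outer(row_marg, col_marg).
    is_random = np.allclose(joint, np.outer(row_marg, col_marg))
    print(is_random)   # True for this example: 0.30 = 0.40 * 0.75, etc.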


Multiclass likelihood ratios and diagnostic odds ratios

We define generalized likelihood ratios calculated from the normalized confusion matrix: for any i and j \neq i, let \mathrm{LR}_{ij} = \frac{\mathbb{P}(\hat{y}=i \mid y=i)}{\mathbb{P}(\hat{y}=i \mid y=j)}. When K=2, if 2 is the class of interest, we recover the classical likelihood ratios: \mathrm{LR}_{21} = \mathrm{LR}_{+} and \mathrm{LR}_{12} = \frac{1}{\mathrm{LR}_{-}}. Multiclass diagnostic odds ratios can also be defined by the formula \mathrm{DOR}_{ij} = \mathrm{DOR}_{ji} = \mathrm{LR}_{ij}\,\mathrm{LR}_{ji} = \frac{\mathbb{P}(\hat{y}=i \mid y=i)\,\mathbb{P}(\hat{y}=j \mid y=j)}{\mathbb{P}(\hat{y}=i \mid y=j)\,\mathbb{P}(\hat{y}=j \mid y=i)} = \frac{n_{ii}\,n_{jj}}{n_{ij}\,n_{ji}}. We saw above that a better-than-chance model (or a random model) must satisfy \mathrm{lift}(y=i, \hat{y}=i) \geq 1 for any i and any class distribution (\lambda_i)_i; it follows that the likelihood ratios are greater than or equal to 1. Conversely, if all the likelihood ratios are greater than or equal to 1, then \mathrm{lift}(y=i, \hat{y}=i) \geq 1 for any i and any class distribution (\lambda_i)_i.
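A sketch of these quantities (NumPy, reusing an illustrative confusion matrix) computes all pairwise likelihood ratios and diagnostic odds ratios:

    import numpy as np

    counts = np.array([[50,  3,  2],
                       [ 4, 40,  6],
                       [ 1,  5, 44]], dtype=float)
    norm_cm = counts / counts.sum(axis=1, keepdims=True)   # P(y_hat = j | y = i)
    K = counts.shape[0]

    # LR_ij = P(y_hat = i | y = i) / P(y_hat = i | y = j) for j != i
    LR = np.full((K, K), np.nan)
    DOR = np.full((K, K), np.nan)
    for i in range(K):
        for j in range(K):
            if i != j:
                LR[i, j] = norm_cm[i, i] / norm_cm[j, i]
                DOR[i, j] = (counts[i, i] * counts[j, j]) / (counts[i, j] * counts[j, i])

    print(LR)    # a better-than-chance model should have every LR >= 1
    print(DOR)   # and hence every DOR >= 1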


Definition of better-than-chance multiclass models

A model \hat{y} of y outperforms chance if the following conditions are met:
* For any j, we have \max_i \mathbb{P}(\hat{y}=j \mid y=i) = \mathbb{P}(\hat{y}=j \mid y=j).
* There exist distinct i and j such that \mathbb{P}(\hat{y}=j \mid y=i) < \mathbb{P}(\hat{y}=j \mid y=j).
If all the entries of the confusion matrix are non-zero, this means that all the likelihood ratios are greater than or equal to 1 and that at least one of these inequalities is strict. A model that satisfies the first condition but not the second is random, since we then have \mathbb{P}(\{y=i\} \cap \{\hat{y}=j\}) = \mathbb{P}(y=i)\,\mathbb{P}(\hat{y}=j \mid y=i) = \mathbb{P}(y=i)\,\mathbb{P}(\hat{y}=j \mid y=j) = \alpha_i \beta_j for any i and j. We can rewrite the first condition in a more familiar way: writing x for the observed value of \hat{y}, \theta for the value of y to be estimated, and \hat{\theta}(x) for the set \operatorname{argmax}_\theta \mathbb{P}(x \mid \theta), the condition states that x \in \hat{\theta}(x) for any x. We deduce that a model is better than random, or random, if and only if it is a maximum likelihood estimator of the target variable.
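The two conditions translate directly into a check on the normalized confusion matrix; the following sketch (NumPy, illustrative) classifies a confusion matrix as better than chance, random, or worse than chance:

    import numpy as np

    def chance_status(counts):
        """Classify a confusion matrix (rows = true class, columns = predicted)."""
        norm_cm = counts / counts.sum(axis=1, keepdims=True)
        diag = np.diag(norm_cm)
        # First condition: in every column j, the diagonal entry is a maximum.
        column_max_on_diagonal = np.all(norm_cm.max(axis=0) == diag)
        # Second condition: at least one off-diagonal entry is strictly smaller.
        strictly_better_somewhere = np.any(norm_cm < diag[None, :])
        if column_max_on_diagonal and strictly_better_somewhere:
            return "better than chance"
        if column_max_on_diagonal:
            return "random"
        return "worse than chance"

    print(chance_status(np.array([[50., 3., 2.], [4., 40., 6.], [1., 5., 44.]])))
    print(chance_status(np.array([[0., 3., 0.], [1., 2., 0.], [0., 0., 3.]])))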


Applications


Multiclass balanced accuracy

The performance of a better-than-chance model can be estimated using multiclass versions of metrics such as balanced accuracy (the average over the classes of the recalls \mathbb{P}(\hat{y}=i \mid y=i)) or Youden's J. If \mathrm{BA} = 1, in other words J = 1, the model is perfect. And for any random model we have \mathrm{BA} = \frac{1}{K} (if, for example, we draw a uniform random number from the K labels, we have exactly one chance in K of predicting the correct value of the target variable). On a balanced data set (\lambda_i = \frac{1}{K} for any i), the balanced accuracy is equal to the rate of well-classified observations. On any data set, if a model does better than chance, we have J \geq 0 and \mathrm{BA} \geq \frac{1}{K}. But the converse is not true when K > 2, as the following example shows: the confusion matrix \begin{pmatrix} 0 & 3 & 0 \\ 1 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} is that of a bad model (worse than chance), since \mathrm{LR}_{12} = 0; however, 5 of the 9 observations are correctly classified. This also shows that poor performance on one of the classes is not compensated for by good performance on the other classes.
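A short sketch (NumPy) computes the multiclass balanced accuracy and the plain accuracy of the example above, illustrating that a worse-than-chance model can still classify a majority of observations correctly:

    import numpy as np

    counts = np.array([[0., 3., 0.],
                       [1., 2., 0.],
                       [0., 0., 3.]])

    recalls = np.diag(counts) / counts.sum(axis=1)   # P(y_hat = i | y = i)
    balanced_accuracy = recalls.mean()
    accuracy = np.diag(counts).sum() / counts.sum()

    print(balanced_accuracy)   # (0 + 2/3 + 1) / 3 = 5/9, above 1/K = 1/3
    print(accuracy)            # 5/9: five of the nine observations are well classified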


ROC space

The set of normalized confusion matrices is called the ROC space; it can be viewed as a subset of [0,1]^{K(K-1)}, each of the K rows of the normalized confusion matrix having K-1 degrees of freedom. If E denotes the subset of the ROC space made up of random models or models that do better than chance, one can show that the topological boundary of E is the set of elements of E for which at least one of the likelihood ratios is equal to 1, and that the random models are exactly those whose likelihood ratios are all equal to 1. When K=2, the boundary between models that do better than chance and bad models is equal to the set of random models (see the article on the ROC curve for more details), but it is strictly larger as soon as K>2. For K=3, one can compute the volume occupied by bad models in the ROC space: they occupy 90% of this space, whereas the proportion is only 50% when K=2.
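As an illustration, the proportion of the ROC space occupied by bad models can be estimated by Monte Carlo sampling. The sketch below (NumPy) assumes that sampling the ROC space uniformly amounts to drawing each row of the normalized confusion matrix independently and uniformly from the probability simplex; this sampling assumption is ours, not stated in the text:

    import numpy as np

    rng = np.random.default_rng(0)

    def is_bad(norm_cm):
        # Bad (worse than chance) iff in some column the diagonal is not a maximum.
        return not np.all(norm_cm.max(axis=0) == np.diag(norm_cm))

    def bad_fraction(K, trials=100_000):
        bad = 0
        for _ in range(trials):
            # Each row uniform on the simplex = Dirichlet(1, ..., 1).
            norm_cm = rng.dirichlet(np.ones(K), size=K)
            bad += is_bad(norm_cm)
        return bad / trials

    print(bad_fraction(2))   # about one half of the ROC space for K = 2
    print(bad_fraction(3))   # a large majority of the ROC space for K = 3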


General algorithmic strategies

The existing multi-class classification techniques can be categorised into:
* transformation to binary
* extension from binary
* hierarchical classification


Transformation to binary

This section discusses strategies for reducing the problem of multiclass classification to multiple binary classification problems. These strategies can be categorized into ''one-vs.-rest'' and ''one-vs.-one''. Techniques based on reducing the multi-class problem into multiple binary problems are also called problem transformation techniques.


One-vs.-rest

The one-vs.-rest (OvR or ''one-vs.-all'', OvA or ''one-against-all'', OAA) strategy involves training a single classifier per class, with the samples of that class as positive samples and all other samples as negatives. This strategy requires the base classifiers to produce a real-valued confidence score for their decisions (see also scoring rule), rather than just a class label; discrete class labels alone can lead to ambiguities, where multiple classes are predicted for a single sample. In multi-label classification, OvR is known as ''binary relevance'' and the prediction of multiple classes is considered a feature, not a problem.
In pseudocode, the training algorithm for an OvR learner constructed from a binary classification learner L is as follows:
:Inputs:
:* L, a learner (training algorithm for binary classifiers)
:* samples X
:* labels y where y_i ∈ {1, …, K} is the label for the sample X_i
:Output:
:* a list of classifiers f_k for k ∈ {1, …, K}
:Procedure:
:* For each k in {1, …, K}
:** Construct a new label vector z where z_i = 1 if y_i = k and z_i = 0 otherwise
:** Apply L to X, z to obtain f_k
Making decisions means applying all classifiers to an unseen sample x and predicting the label k for which the corresponding classifier reports the highest confidence score:
:\hat{y} = \underset{k \in \{1, \dots, K\}}{\arg\max}\; f_k(x)
Although this strategy is popular, it is a heuristic that suffers from several problems. First, the scale of the confidence values may differ between the binary classifiers. Second, even if the class distribution is balanced in the training set, the binary classification learners see unbalanced distributions, because typically the set of negatives they see is much larger than the set of positives.
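A minimal Python sketch of this reduction, assuming scikit-learn is available to provide a binary base learner with a decision score (the helper names and the toy dataset are illustrative):

    import numpy as np
    from sklearn.linear_model import LogisticRegression   # any binary learner with a decision score works

    def train_ovr(X, y, classes):
        """Train one binary classifier per class (positive = that class, negative = the rest)."""
        return {k: LogisticRegression().fit(X, (y == k).astype(int)) for k in classes}

    def predict_ovr(classifiers, X):
        # Pick, for each sample, the class whose classifier reports the highest confidence score.
        classes = sorted(classifiers)
        scores = np.column_stack([classifiers[k].decision_function(X) for k in classes])
        return np.array(classes)[scores.argmax(axis=1)]

    # Tiny illustrative dataset with three classes.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.5, size=(20, 2)) for c in ([0, 0], [3, 0], [0, 3])])
    y = np.repeat([0, 1, 2], 20)
    clfs = train_ovr(X, y, classes=[0, 1, 2])
    print(predict_ovr(clfs, X[:5]))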


One-vs.-one

In the ''one-vs.-one'' (OvO) reduction, one trains K(K − 1)/2 binary classifiers for a K-way multiclass problem; each receives the samples of a pair of classes from the original training set, and must learn to distinguish these two classes. At prediction time, a voting scheme is applied: all K(K − 1)/2 classifiers are applied to an unseen sample, and the class that receives the highest number of "+1" predictions is predicted by the combined classifier. Like OvR, OvO suffers from ambiguities, in that some regions of its input space may receive the same number of votes.
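A minimal sketch of OvO training and voting (again assuming scikit-learn's LogisticRegression as the pairwise binary learner; function names and data are illustrative):

    import itertools
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_ovo(X, y, classes):
        """Train one binary classifier per pair of classes, on the samples of those two classes only."""
        models = {}
        for a, b in itertools.combinations(classes, 2):
            mask = np.isin(y, [a, b])
            models[(a, b)] = LogisticRegression().fit(X[mask], (y[mask] == b).astype(int))
        return models

    def predict_ovo(models, X, classes):
        votes = np.zeros((len(X), len(classes)), dtype=int)
        index = {c: i for i, c in enumerate(classes)}
        for (a, b), model in models.items():
            pred_b = model.predict(X)   # 1 means "class b wins", 0 means "class a wins"
            votes[np.arange(len(X)), np.where(pred_b == 1, index[b], index[a])] += 1
        return np.array(classes)[votes.argmax(axis=1)]   # ties are broken by class order

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.5, size=(20, 2)) for c in ([0, 0], [3, 0], [0, 3])])
    y = np.repeat([0, 1, 2], 20)
    models = train_ovo(X, y, classes=[0, 1, 2])
    print(predict_ovo(models, X[:5], classes=[0, 1, 2]))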


Extension from binary

This section discusses strategies for extending existing binary classifiers to solve multi-class classification problems. Several algorithms have been developed based on neural networks, decision trees, k-nearest neighbours, naive Bayes, support vector machines and extreme learning machines to address multi-class classification problems. These types of techniques can also be called algorithm adaptation techniques.


Neural networks

Multiclass perceptrons provide a natural extension to the multi-class problem. Instead of just having one neuron in the output layer with binary output, one can have N binary neurons, leading to multi-class classification. In practice, the last layer of a neural network is usually a softmax function layer, which is the algebraic simplification of N logistic classifiers, normalized per class by the sum of the N − 1 other logistic classifiers. Neural-network-based classification has brought significant improvements and opened new perspectives on multi-class problems.
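A brief sketch of a softmax output layer (NumPy; the logits are made-up values), showing how raw scores are turned into a probability distribution over the K classes:

    import numpy as np

    def softmax(logits):
        # Subtract the maximum for numerical stability; the result is unchanged mathematically.
        shifted = logits - logits.max(axis=-1, keepdims=True)
        exp = np.exp(shifted)
        return exp / exp.sum(axis=-1, keepdims=True)

    logits = np.array([2.0, 0.5, -1.0])   # raw scores of the last layer for K = 3 classes
    probs = softmax(logits)
    print(probs, probs.sum())             # probabilities summing to 1
    print(probs.argmax())                 # predicted class index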


Extreme learning machines


Extreme learning machines (ELM) are a special case of single-hidden-layer feed-forward neural networks (SLFNs) in which the input weights and the hidden node biases can be chosen at random. Many variants of and developments to ELM have been made for multiclass classification.


k-nearest neighbours

k-nearest neighbours (kNN) is considered among the oldest non-parametric classification algorithms. To classify an unknown example, the distance from that example to every training example is measured. The k smallest distances are identified, and the class most represented among these k nearest neighbours is taken as the output class label.
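A compact sketch of this procedure (NumPy, Euclidean distance, majority vote; all names and data are illustrative):

    import numpy as np

    def knn_predict(X_train, y_train, x, k=3):
        # Euclidean distances from the unknown example to every training example.
        distances = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(distances)[:k]          # indices of the k smallest distances
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        return labels[counts.argmax()]               # most represented class among the neighbours

    X_train = np.array([[0.0, 0.0], [0.1, 0.2], [3.0, 3.1], [2.9, 3.0], [0.2, 0.1]])
    y_train = np.array([0, 0, 1, 1, 0])
    print(knn_predict(X_train, y_train, np.array([0.15, 0.1]), k=3))   # expected: 0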


Naive Bayes

Naive Bayes is a successful classifier based upon the principle of maximum a posteriori (MAP). This approach is naturally extensible to the case of having more than two classes, and was shown to perform well in spite of the underlying simplifying assumption of conditional independence.
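A minimal sketch of the MAP decision rule with a Gaussian naive Bayes model (NumPy; class priors and per-feature Gaussians estimated from made-up training data) — a sketch under these modelling assumptions, not a reference implementation:

    import numpy as np

    def fit_gaussian_nb(X, y):
        classes = np.unique(y)
        priors = np.array([(y == c).mean() for c in classes])
        means = np.array([X[y == c].mean(axis=0) for c in classes])
        stds = np.array([X[y == c].std(axis=0) + 1e-9 for c in classes])  # avoid division by zero
        return classes, priors, means, stds

    def predict_map(x, classes, priors, means, stds):
        # log P(y = c) + sum_d log P(x_d | y = c), with conditionally independent Gaussian features.
        log_lik = -0.5 * (((x - means) / stds) ** 2 + np.log(2 * np.pi * stds ** 2)).sum(axis=1)
        return classes[np.argmax(np.log(priors) + log_lik)]

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.5, size=(30, 2)) for c in ([0, 0], [3, 0], [0, 3])])
    y = np.repeat([0, 1, 2], 30)
    params = fit_gaussian_nb(X, y)
    print(predict_map(np.array([2.8, 0.2]), *params))   # expected: class 1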


Decision trees

Decision tree learning is a powerful classification technique. The tree tries to infer a split of the training data based on the values of the available features to produce a good generalization. The algorithm can naturally handle binary or multiclass classification problems; the leaf nodes can refer to any of the K classes concerned.


Support vector machines

Support vector machines are based upon the idea of maximizing the margin, i.e. maximizing the minimum distance from the separating hyperplane to the nearest example. The basic SVM supports only binary classification, but extensions have been proposed to handle the multiclass classification case as well. In these extensions, additional parameters and constraints are added to the optimization problem to handle the separation of the different classes.
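In practice, library implementations typically expose multiclass SVMs directly; a minimal usage sketch with scikit-learn (assuming the library is available; its SVC combines binary SVMs internally using a one-vs-one scheme rather than the joint-optimization extensions described above):

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.5, size=(20, 2)) for c in ([0, 0], [3, 0], [0, 3])])
    y = np.repeat([0, 1, 2], 20)

    clf = SVC(kernel="rbf").fit(X, y)              # multiclass handled internally (one-vs-one)
    print(clf.predict([[2.9, 0.1], [0.1, 2.8]]))   # expected: [1 2]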


Multi expression programming

Multi expression programming (MEP) is an evolutionary algorithm for generating computer programs (that can be used for classification tasks too). MEP has a unique feature: it encodes multiple programs into a single chromosome. Each of these programs can be used to generate the output for a class, making MEP naturally suitable for solving multi-class classification problems.


Hierarchical classification

Hierarchical classification tackles the multi-class classification problem by dividing the output space, i.e. the set of classes, into a tree. Each parent node is divided into multiple child nodes, and the process continues until each child node represents only one class. Several methods have been proposed based on hierarchical classification.


Learning paradigms

Based on learning paradigms, the existing multi-class classification techniques can be classified into batch learning and online learning. Batch learning algorithms require all the data samples to be available beforehand: the model is trained on the entire training set and then used to predict the test samples. Online learning algorithms, on the other hand, incrementally build their models in sequential iterations. In iteration t, an online algorithm receives a sample x_t and predicts its label ŷ_t using the current model; the algorithm then receives y_t, the true label of x_t, and updates its model based on the sample-label pair (x_t, y_t). More recently, a learning paradigm called progressive learning has been developed. A progressive learning technique is capable not only of learning from new samples but also of learning new classes of data, while retaining the knowledge learnt thus far.
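The online protocol can be illustrated with a multiclass perceptron update loop (NumPy; the update rule shown is the standard multiclass perceptron, used here only as one example of an online learner, and the data stream is made up):

    import numpy as np

    rng = np.random.default_rng(0)
    K, d = 3, 2
    W = np.zeros((K, d + 1))                  # one weight vector per class (last column = bias)

    centers = np.array([[0, 0], [3, 0], [0, 3]], dtype=float)
    mistakes = 0
    for t in range(1000):
        true_label = int(rng.integers(K))
        x_t = np.append(centers[true_label] + rng.normal(0, 0.5, size=d), 1.0)  # bias term
        y_hat_t = int(np.argmax(W @ x_t))     # predict with the current model
        if y_hat_t != true_label:             # then receive the true label and update
            mistakes += 1
            W[true_label] += x_t              # promote the correct class
            W[y_hat_t] -= x_t                 # demote the predicted class
    print(mistakes)   # the mistake count stays small because the stream is linearly separable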


Evaluation

The performance of a multi-class classification system is often assessed by comparing the predictions of the system against reference labels with an evaluation metric. Common evaluation metrics are accuracy and macro F1.
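A short sketch (NumPy) computing both metrics from a confusion matrix, with macro F1 taken as the unweighted mean of the per-class F1 scores (the example counts are illustrative):

    import numpy as np

    def accuracy_and_macro_f1(counts):
        """counts: confusion matrix with rows = reference labels, columns = predictions."""
        tp = np.diag(counts).astype(float)
        precision = tp / np.maximum(counts.sum(axis=0), 1e-12)   # per predicted class
        recall = tp / np.maximum(counts.sum(axis=1), 1e-12)      # per reference class
        f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
        return tp.sum() / counts.sum(), f1.mean()

    counts = np.array([[50, 3, 2], [4, 40, 6], [1, 5, 44]], dtype=float)
    print(accuracy_and_macro_f1(counts))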


See also

* Binary classification
* One-class classification
* Multi-label classification
* Multiclass perceptron
* Multi-task learning
